
    Any Data, Any Time, Anywhere: Global Data Access for Science

    Full text link
    Data access is key to science driven by distributed high-throughput computing (DHTC), an essential technology for many major research projects such as High Energy Physics (HEP) experiments. However, achieving efficient data access becomes quite difficult when many independent storage sites are involved, because users are burdened with learning the intricacies of accessing each system and keeping careful track of data location. We present an alternate approach: the Any Data, Any Time, Anywhere (AAA) infrastructure. Combining several existing software products, AAA presents a global, unified view of storage systems (a "data federation"), a global filesystem for software delivery, and a workflow management system. We present how one HEP experiment, the Compact Muon Solenoid (CMS), is utilizing the AAA infrastructure, along with some simple performance metrics. Comment: 9 pages, 6 figures, submitted to the 2nd IEEE/ACM International Symposium on Big Data Computing (BDC) 201
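
    In CMS, a data federation of this kind is typically built on XRootD redirection: a client asks a global redirector for a logical file name and is transparently redirected to a site holding a replica. The following is a minimal illustrative sketch (not code from the paper), assuming the pyxrootd bindings are installed; the redirector hostname and the /store path are placeholders rather than values taken from the text.

        # Illustrative sketch, assuming the XRootD Python bindings are available;
        # the redirector host and file path below are hypothetical placeholders.
        from XRootD import client
        from XRootD.client.flags import OpenFlags

        url = "root://cms-xrd-global.cern.ch//store/example/file.root"  # hypothetical logical file name

        f = client.File()
        status, _ = f.open(url, OpenFlags.READ)
        if not status.ok:
            raise RuntimeError(status.message)

        # Read the first kilobyte as a simple connectivity check; real analysis jobs
        # would instead hand the URL to their I/O framework.
        status, data = f.read(offset=0, size=1024)
        print(f"read {len(data)} bytes through the federation")
        f.close()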

    Implementation and use of a highly available and innovative IaaS solution: the Cloud Area Padovana

    Get PDF
    While in the business world the cloud paradigm is typically implemented by purchasing resources and services from third-party providers (e.g. Amazon), in the scientific environment there is usually a need for on-premises IaaS infrastructures that allow efficient usage of hardware distributed among (and owned by) different scientific administrative domains. In addition, the requirement of open-source adoption has led many organizations to choose products such as OpenStack. We describe a use case of the Italian National Institute for Nuclear Physics (INFN) which resulted in the implementation of a single cloud service, called 'Cloud Area Padovana', which encompasses resources spread over two different sites: the INFN Legnaro National Laboratories and the INFN Padova division. We describe how this IaaS has been implemented, which technologies have been adopted, and how services have been configured in high-availability (HA) mode. We also discuss how identity and authorization management were implemented, adopting a widely accepted standard architecture based on SAML2 and OpenID: by leveraging the versatility of these standards, integration with authentication federations such as IDEM was achieved. We also discuss some other innovative developments, such as a pluggable scheduler, implemented as an extension of the native OpenStack scheduler, which allocates resources according to a fair-share model and provides a persistent queuing mechanism for handling user requests that cannot be served immediately. The tools, technologies, and procedures used to install, configure, monitor, and operate this cloud service are also discussed. Finally, we present some examples that show how this IaaS infrastructure is being used.
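
    From the user's point of view, an on-premises OpenStack IaaS of this kind is driven through the standard OpenStack APIs. The sketch below, which is illustrative rather than taken from the paper, requests a virtual machine with the openstacksdk client; the cloud entry, image, flavor, and network names are hypothetical placeholders.

        # Minimal sketch, assuming an entry named "cloud-area-padovana" exists in the
        # local clouds.yaml; all resource names below are hypothetical.
        import openstack

        conn = openstack.connect(cloud="cloud-area-padovana")

        image = conn.compute.find_image("CentOS-7")          # placeholder image name
        flavor = conn.compute.find_flavor("m1.medium")       # placeholder flavor name
        network = conn.network.find_network("private-net")   # placeholder network name

        server = conn.compute.create_server(
            name="analysis-node",
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{"uuid": network.id}],
        )
        server = conn.compute.wait_for_server(server)
        print(server.status)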

    Severe early onset preeclampsia: short and long term clinical, psychosocial and biochemical aspects

    Get PDF
    Preeclampsia is a pregnancy-specific disorder commonly defined as de novo hypertension and proteinuria after 20 weeks' gestational age. It occurs in approximately 3-5% of pregnancies and is still a major cause of both foetal and maternal morbidity and mortality worldwide [1]. As extensive research has not yet elucidated the aetiology of preeclampsia, there are no rational preventive or therapeutic interventions available. The only rational treatment is delivery, which benefits the mother but is not in the interest of the foetus if remote from term. Early onset preeclampsia (<32 weeks' gestational age) occurs in less than 1% of pregnancies. It is, however, often associated with maternal morbidity, as the risk of progression to severe maternal disease is inversely related to gestational age at onset [2]. The resulting prematurity is therefore the main cause of neonatal mortality and morbidity in patients with severe preeclampsia [3]. Although the discussion is ongoing, perinatal survival is suggested to be increased in patients with preterm preeclampsia by expectant, non-interventional management. This temporising treatment option to lengthen pregnancy includes the use of antihypertensive medication to control hypertension, magnesium sulphate to prevent eclampsia, and corticosteroids to enhance foetal lung maturity [4]. With optimal maternal haemodynamic status and a reassuring foetal condition, this results on average in an extension of 2 weeks. Prolongation of these pregnancies poses a great challenge for clinicians, who must balance potential maternal risks on the one hand against possible foetal benefits on the other. Clinical controversy regarding prolongation of preterm preeclamptic pregnancies persists, particularly since preeclampsia is the leading cause of maternal mortality in the Netherlands [5], and the debate is even more pronounced in very preterm pregnancies with questionable foetal viability [6-9]. Do the maternal risks of prolonging these very early pregnancies outweigh the chances of neonatal survival? Counselling of women with very early onset preeclampsia requires not only knowledge of the outcome of those particular pregnancies, but also knowledge of the outcomes of these women's future pregnancies, which is of major clinical importance. This thesis opens with a review of the literature on identifiable risk factors for preeclampsia.

    Search for massive resonances in dijet systems containing jets tagged as W or Z boson decays in pp collisions at √s=8 TeV

    Get PDF
    Peer reviewed

    INDIGO-DataCloud: A data and computing platform to facilitate seamless access to e-infrastructures

    Get PDF
    This paper describes the achievements of the H2020 project INDIGO-DataCloud. The project has provided e-infrastructures with tools, applications, and cloud framework enhancements to manage the demanding requirements of scientific communities, either locally or through enhanced interfaces. The middleware developed makes it possible to federate hybrid resources and to easily write, port, and run scientific applications on the cloud. In particular, we have extended existing PaaS (Platform as a Service) solutions, allowing public and private e-infrastructures, including those provided by EGI, EUDAT, and Helix Nebula, to integrate their existing services and make them available through AAI services compliant with GEANT interfederation policies, thus guaranteeing transparency and trust in the provisioning of such services. Our middleware facilitates the execution of applications using containers on Cloud- and Grid-based infrastructures, as well as on HPC clusters. Our developments are freely downloadable as open-source components, and are already being integrated into many scientific applications.
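
    As a hedged illustration of container-based execution on resources where users have no root privileges, the sketch below drives udocker (a user-space container runner developed within the INDIGO-DataCloud project) from Python. It assumes udocker is installed and on PATH; the image, container name, and command are placeholders, not anything prescribed by the paper.

        # Illustrative sketch only: run a containerized command without root privileges
        # via udocker. Assumes the udocker executable is available on PATH.
        import subprocess

        image = "docker.io/library/python:3.11"   # placeholder image
        container = "indigo-demo"                 # placeholder container name

        subprocess.run(["udocker", "pull", image], check=True)
        subprocess.run(["udocker", "create", f"--name={container}", image], check=True)
        subprocess.run(
            ["udocker", "run", container,
             "python3", "-c", "print('hello from a user-space container')"],
            check=True,
        )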

    Discussing environmental education in everyday school life: development of projects in the school and initial and continuing teacher education

    Get PDF
    This study set out to discuss how Environmental Education (EE) has been addressed in primary education at a state school in the municipality of Tangará da Serra/MT, Brazil, and how the school's teachers understand EE and have been incorporating it into everyday school life. To this end, interviews were conducted with the teachers taking part in an interdisciplinary EE project at the school studied. It was found that the school's project has not been achieving its stated objectives, owing to: teachers' unfamiliarity with the project itself; inadequate teacher training; failure to understand EE as a teaching-learning process; lack of didactic resources; and inadequate planning of activities. Based on these findings, we discuss the impossibility of addressing the topic outside interdisciplinary work and, above all, the importance of a deeper study of EE that links theory and practice, both in teacher education and in school projects, in order to move beyond the traditional association of "EE with ecology, waste, and vegetable gardens".

    Using ssh as portal - The CMS CRAB over glideinWMS experience

    No full text
    The user analysis of the CMS experiment is performed in a distributed way using both Grid and dedicated resources. In order to insulate the users from the details of the computing fabric, CMS relies on the CRAB (CMS Remote Analysis Builder) package as an abstraction layer. CMS has recently switched from a client-server version of CRAB to a purely client-based solution, with ssh being used to interface with the HTCondor-based glideinWMS batch system. This switch has resulted in a significant improvement in user satisfaction, as well as in a significant simplification of the CRAB code base and of the operational support. This paper presents the architecture of the ssh-based CRAB package and the rationale behind it, as well as the operational experience of running both the client-server and the ssh-based versions in parallel for several months.
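
    The core idea of the ssh-as-portal approach is that the client forwards submission and monitoring commands to a remote HTCondor submission node over ssh. The sketch below is illustrative only (not CRAB's actual code); the host name and submit-file path are hypothetical placeholders.

        # Illustrative sketch: forward job submission and queue queries to a remote
        # HTCondor schedd over ssh. Host and file names below are hypothetical.
        import subprocess

        SUBMIT_HOST = "submit.example.org"      # hypothetical glideinWMS submission node
        REMOTE_SUBMIT_FILE = "analysis.jdl"     # hypothetical HTCondor submit description

        def remote_condor_submit(host: str, submit_file: str) -> str:
            """Run condor_submit on the remote host via ssh and return its output."""
            result = subprocess.run(
                ["ssh", host, "condor_submit", submit_file],
                check=True, capture_output=True, text=True,
            )
            return result.stdout

        def remote_condor_q(host: str) -> str:
            """Query the remote queue; useful for client-side status monitoring."""
            result = subprocess.run(
                ["ssh", host, "condor_q"],
                check=True, capture_output=True, text=True,
            )
            return result.stdout

        if __name__ == "__main__":
            print(remote_condor_submit(SUBMIT_HOST, REMOTE_SUBMIT_FILE))
            print(remote_condor_q(SUBMIT_HOST))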

    CRAB: the CMS Tool to Allow Data Analysis in a distributed Environment

    No full text
    The CMS collaboration is developing a tool to allow physicists to access and analyze data stored at geographically distributed sites, simplifying data discovery and hiding the details of analysis job creation, execution, and monitoring in the Grid environment. With this presentation we would like to show the progress of our work and some statistics about its usage. The CMS experiment will produce a few PBytes of data each year, to be distributed and stored in many computing centres spread across the countries participating in the CMS collaboration and made available for analysis to physicists distributed world-wide. CMS will use a distributed architecture based on Grid infrastructure to analyze data stored at remote sites, to restrict data access to authorized users, and to ensure the availability of remote resources. Data analysis in a distributed environment requires knowing which data are available, where the data are stored, and how to access them.

    To simplify analysis job creation and management, the CMS collaboration is developing CRAB (CMS Remote Analysis Builder), a tool that allows users with no specific Grid knowledge to run analyses in the distributed environment as if the data were in their local farm. CRAB is developed both as a standalone tool and as a client-server system, the latter to improve throughput and scalability and to automate most of CRAB's functionality. Users have to provide CRAB with the name of the dataset to analyze, the total number of events, their analysis configuration file, and their libraries. They must belong to the CMS Virtual Organization and have a valid Grid certificate. CRAB creates a wrapper around the analysis executable that includes the CMS environment setup and output management. CRAB finds the data location by querying a dedicated CMS catalog and splits the requested number of events into jobs according to the data block distribution. CRAB packs the user code and sends it to the remote resources together with the wrapper. Job submission is done using Grid workload management commands. Resource availability, status monitoring, and output retrieval of submitted jobs are fully handled by CRAB. For job submission CRAB is interfaced with the gLite WMS and with OSG, based on Condor-G. CRAB uses the voms-proxy server to create the user proxy certificate and its delegation, the LB API to check the status of jobs, and the UI commands to manage jobs.

    During the last year the number of users and of jobs submitted via CRAB has increased. This result shows that CRAB is useful for running analyses in the Grid environment and that the development of the client-server architecture is needed to guarantee scalability. Our experience with CRAB also exposes weaknesses in some Grid services, such as WMS constraints, problems with sandbox size, and problems with the protocol for copying the produced output to remote mass storage. Remote sites need continuous checks to guarantee availability.
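
    The event-splitting step described above (dividing the requested number of events into jobs that respect data-block boundaries) can be illustrated with a small, self-contained sketch. This is not CRAB's actual implementation; the block names and sizes are invented for the example.

        # Illustrative sketch of event-based job splitting constrained by data blocks.
        def split_by_blocks(blocks, total_events, events_per_job):
            """Return a list of (block, first_event, n_events) job specifications."""
            jobs = []
            remaining = total_events
            for block_name, events_in_block in blocks:
                if remaining <= 0:
                    break
                to_take = min(events_in_block, remaining)  # never cross a block boundary
                first = 0
                while first < to_take:
                    n = min(events_per_job, to_take - first)
                    jobs.append((block_name, first, n))
                    first += n
                remaining -= to_take
            return jobs

        if __name__ == "__main__":
            example_blocks = [("/Dataset/BlockA", 5000), ("/Dataset/BlockB", 3000)]  # hypothetical
            for job in split_by_blocks(example_blocks, total_events=6000, events_per_job=2500):
                print(job)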

    Automation of user analysis workflow in CMS

    No full text
    CMS has a distributed computing model, based on a hierarchy of tiered regional computing centres. However, the end physicist is not interested in the details of the computing model or the complexity of the underlying infrastructure, but only in accessing and using the remote services efficiently and easily. The CMS Remote Analysis Builder (CRAB) is the official CMS tool that allows access to the distributed data in a transparent way. We present the current development direction, which is focused on improving the interface presented to the user and on adding intelligence to CRAB so that it can automate more and more of the work done on behalf of the user. We also present the status of the deployment of the CRAB system and the lessons learnt in deploying this tool to the CMS collaboration.